Stochastic Phonology
Abstract
In classic generative phonology, linguistic competence in the area of sound structure is modeled by a phonological grammar. The theory takes a grammatical form because it posits an inventory of categories (such as features, phonemes, syllables, or feet) and a set of principles which specify the well-formed combinations of these categories. In any particular language, a particular set of principles delineates phonological well-formedness. By comparing phonologies of diverse languages, we can identify commonalities, both in the categories and in the principles for combining them, which suggest the existence of a universal grammar for sound structure.
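To make the contrast with a purely categorical grammar concrete, the toy sketch below treats well-formedness as gradient: a phoneme-bigram model trained on a handful of forms assigns every candidate string a probability-like score rather than a binary verdict. This is a minimal illustration only; the mini lexicon, the segment inventory, and the function names are hypothetical and are not drawn from the article.

```python
from collections import defaultdict

# hypothetical mini-lexicon of segmented training forms
LEXICON = [list("kat"), list("tak"), list("takt"), list("kast"), list("stak")]

def train_bigram(lexicon):
    """Estimate P(next segment | previous segment) with add-one smoothing."""
    counts = defaultdict(lambda: defaultdict(int))
    segments = set()
    for word in lexicon:
        padded = ["#"] + word + ["#"]          # "#" marks word boundaries
        segments.update(padded)
        for prev, nxt in zip(padded, padded[1:]):
            counts[prev][nxt] += 1
    def prob(prev, nxt):
        total = sum(counts[prev].values()) + len(segments)
        return (counts[prev][nxt] + 1) / total
    return prob

def wellformedness(word, prob):
    """Gradient well-formedness score: product of bigram probabilities."""
    padded = ["#"] + list(word) + ["#"]
    score = 1.0
    for prev, nxt in zip(padded, padded[1:]):
        score *= prob(prev, nxt)
    return score

prob = train_bigram(LEXICON)
for candidate in ["tak", "takst", "ktka"]:
    print(candidate, wellformedness(candidate, prob))  # higher = more word-like
```

Unattested sequences are not ruled out outright here; they simply receive a much lower score, which is the basic intuition behind stochastic approaches to phonotactics.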
Similar resources
Optionality and Gradience in Persian Phonology: An Optimality Treatment
The distribution of the allophones of /ʔ/ in certain contexts involves free variation and gradient preferences. An organized survey was conducted to elicit the judgments of 37 native Persian speakers concerning the well-formedness of /ʔ/ allophonic behavior in five different phonological positions. The results showed that the differences in judgment between the various categories are not just t...
Computational Phonology - Part II: Grammars, Learning, and the Future
Computational phonology studies sound patterns in the world’s languages from a computational perspective. This article shows that the similarities between different generative theories outweigh the differences, and discusses stochastic grammars and learning models within phonology from a computational perspective. Also, it shows how the hypothesis that all sound patterns are subregular can be i...
Finite-state Transducer Based Phonology and Morphology Modeling with Applications to Hungarian LVCSR
This article introduces a novel approach to model phonology and morphosyntax in morpheme unit based speech recognizers. The proposed method is evaluated in our recent Hungarian large vocabulary continuous speech recognition (LVCSR) system. The architecture of the recognition system is based on the weighted finite state transducer (WFST) paradigm. The task domain is the recognition of fluently r...
Noise robustness and stochastic tolerance of OT error-driven ranking algorithms
Recent counterexamples show that Harmonic Grammar (HG) error-driven learning (with the classical Perceptron reweighing rule) is not robust to noise and does not tolerate the stochastic implementation (Magri 2014, MS). This article guarantees that no analogous counterexamples are possible for proper Optimality Theory (OT) error-driven learners. In fact, a simple extension of the OT convergence a...
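For orientation, the sketch below shows the classical Perceptron-style reweighting that the abstract refers to, in a Harmonic Grammar setting: when the learner's current weights prefer the wrong candidate, the weights move by the difference between the loser's and the winner's constraint-violation vectors. It is a minimal sketch under simple assumptions (numeric violation counts, non-negative weights); the function and variable names are hypothetical, and it is not Magri's proof construction.

```python
import numpy as np

def hg_perceptron_update(weights, winner_viols, loser_viols, rate=0.1):
    """One error-driven update of HG constraint weights.

    weights:      current non-negative constraint weights
    winner_viols: violation counts of the intended winner
    loser_viols:  violation counts of the candidate wrongly preferred
    Harmony of a candidate is -(weights . violations); on an error, the
    Perceptron rule shifts the weights by the violation difference.
    """
    weights = weights + rate * (np.asarray(loser_viols, dtype=float)
                                - np.asarray(winner_viols, dtype=float))
    return np.clip(weights, 0.0, None)  # keep weights non-negative

# toy example: two constraints, the loser violates constraint 0 more
w = np.array([1.0, 1.0])
w = hg_perceptron_update(w, winner_viols=[0, 1], loser_viols=[2, 0])
print(w)  # constraint 0 is promoted, constraint 1 is slightly demoted
```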
Random perturbations of stochastic processes with unbounded variable length memory (Pierre Collet)
We consider binary infinite order stochastic chains perturbed by a random noise. This means that at each time step, the value assumed by the chain can be randomly and independently flipped with a small fixed probability. We show that the transition probabilities of the perturbed chain are uniformly close to the corresponding transition probabilities of the original chain. As a consequence, in t...
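The perturbation mechanism itself is easy to picture: each symbol of a binary chain is flipped independently with a small fixed probability. The snippet below is a minimal illustration of that noise model (the names and the epsilon value are hypothetical); it does not reproduce the paper's comparison of transition probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(chain, eps):
    """Flip each binary symbol independently with small probability eps."""
    flips = rng.random(len(chain)) < eps
    return np.where(flips, 1 - chain, chain)

# toy illustration: a binary chain and its noisy version
x = rng.integers(0, 2, size=20)
print(x)
print(perturb(x, eps=0.05))
```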